Sinkhorn divergence
Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
Giulia Luise, Saverio Salzo, Massimiliano Pontil, Carlo Ciliberto
We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence. Based on a Frank-Wolfe optimization strategy, our approach proceeds by populating the support of the barycenter incrementally, without requiring any pre-allocation.
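The object being optimized here, the Sinkhorn divergence, can be illustrated with a minimal NumPy sketch. This is not the authors' free-support Frank-Wolfe algorithm; it is a generic debiased divergence S(a,b) = OT(a,b) − ½OT(a,a) − ½OT(b,b) between weighted point clouds, where `sinkhorn_cost` uses the plain transport cost ⟨P, C⟩ after Sinkhorn matrix scaling (a simplification of the full entropic objective). Function names and parameters are illustrative.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iter=200):
    """Entropic OT between histograms a, b with cost matrix C,
    via Sinkhorn matrix scaling; returns the transport cost <P, C>."""
    K = np.exp(-C / eps)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):        # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return np.sum(P * C)

def sinkhorn_divergence(x, y, a, b, eps=0.1):
    """Debiased Sinkhorn divergence between weighted point clouds
    (x, a) and (y, b), with squared Euclidean ground cost."""
    C = lambda p, q: np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return (sinkhorn_cost(a, b, C(x, y), eps)
            - 0.5 * sinkhorn_cost(a, a, C(x, x), eps)
            - 0.5 * sinkhorn_cost(b, b, C(y, y), eps))
```

The debiasing terms make the divergence vanish when the two point clouds coincide, which is what makes it a sensible objective for barycenters.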
[Figure 1]

Table 2: Time to reach relative improvement 10

| m \ n | 100      | 1000    |
|-------|----------|---------|
| 10    | 29.4 s   | 33.6 s  |
| 50    | 8.1 min  | 9.1 min |
| 100   | 15.1 min | 24.2 min |
We thank the reviewers for their comments, which we address individually below (due to space limits, please zoom in on the small figures). For [18] we used Alg. 2. We thank the reviewer for the additional reference, which we will add to the paper. (Gradient Descent) applied in parallel to multiple starting points. We thank R2 for the reference "Entropic regularization of continuous optimal transport problems".
Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
Although machine learning models trained on massive data have led to breakthroughs in several areas, their deployment in privacy-sensitive domains remains limited due to restricted access to data. Generative models trained with privacy constraints on private data can sidestep this challenge, providing indirect access to private data instead. We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy. DP-Sinkhorn minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and data in a differentially private manner and uses a novel technique for controlling the bias-variance trade-off of gradient estimates. Unlike existing approaches for training differentially private generative models, which are mostly based on generative adversarial networks, we do not rely on adversarial objectives, which are notoriously difficult to optimize, especially in the presence of noise imposed by privacy constraints. Hence, DP-Sinkhorn is easy to train and deploy. Experimentally, we improve upon the state-of-the-art on multiple image modeling benchmarks and show differentially private synthesis of informative RGB images.
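The abstract does not spell out the privacy mechanism, but differentially private training of this kind typically relies on the Gaussian mechanism: clip each per-example gradient to a fixed norm, average, and add noise calibrated to the clipping bound. The sketch below is a generic illustration of that step, not DP-Sinkhorn's exact estimator; the function name and parameters are assumptions.

```python
import numpy as np

def dp_gaussian_mechanism(per_sample_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip each per-example gradient to `clip_norm`, average, and add
    Gaussian noise scaled to the clipping bound (DP-SGD-style mechanism)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # rescale only if the gradient exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

Clipping bounds each example's influence on the update (the sensitivity), which is what lets the added noise translate into a formal differential-privacy guarantee; the `noise_mult` parameter then trades privacy against gradient fidelity.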